1.
Front Microbiol ; 14: 1250806, 2023.
Article in English | MEDLINE | ID: mdl-38075858

ABSTRACT

The human microbiome has become an area of intense research due to its potential impact on human health. However, the analysis and interpretation of these data have proven challenging due to their complexity and high dimensionality. Machine learning (ML) algorithms can process vast amounts of data to uncover informative patterns and relationships, even with limited prior knowledge. Therefore, there has been rapid growth in the development of software specifically designed for the analysis and interpretation of microbiome data using ML techniques. These software tools incorporate a wide range of ML algorithms for clustering, classification, regression, and feature selection to identify microbial patterns and relationships within the data and to generate predictive models. This rapid development, with its constant need for new methods and integration of new features, requires efforts to compile, catalog, and classify these tools so that infrastructures and services can be built on easy, transparent, and trustworthy standards. Here we review the state of the art of ML tools applied in human microbiome studies, performed as part of the COST Action ML4Microbiome activities. This scoping review focuses on ML-based software and framework resources currently available for the analysis of human microbiome data. The aim is to help microbiologists and biomedical scientists explore specialized resources that integrate ML techniques and to facilitate future benchmarking toward standards for the analysis of microbiome data. The software resources are organized by the type of analysis they were developed for and the ML techniques they implement. A description of each tool with usage examples is provided, including comments on pitfalls and gaps that developers and users need to consider when applying ML-based software to microbiome data. This review represents an extensive compilation to date, offering valuable insights and guidance for researchers interested in leveraging ML approaches for microbiome analysis.
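To make the type of analysis these tools implement concrete, the following minimal Python sketch combines feature selection and classification on a microbial abundance table with scikit-learn. It is not taken from any of the reviewed packages; the data, sample size, taxon count, and labels are hypothetical.

```python
# Minimal sketch of a typical ML workflow for microbiome data (hypothetical data,
# not from any of the reviewed tools): feature selection + classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((100, 500))        # 100 samples x 500 taxa (relative abundances)
y = rng.integers(0, 2, size=100)  # hypothetical case/control labels

# Keep the 50 most informative taxa, then fit a random forest classifier.
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```

Keeping feature selection inside the cross-validated pipeline prevents the selected taxa from leaking information out of the test folds, one of the pitfalls such tools aim to guard against.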

2.
Front Microbiol ; 14: 1257002, 2023.
Article in English | MEDLINE | ID: mdl-37808321

ABSTRACT

The rapid development of machine learning (ML) techniques has opened up the data-dense field of microbiome research for novel therapeutic, diagnostic, and prognostic applications targeting a wide range of disorders, which could substantially improve healthcare practices in the era of precision medicine. However, several challenges must be addressed to fully exploit the benefits of ML in this field. In particular, there is a need to establish "gold standard" protocols for conducting ML analysis experiments and to improve interactions between microbiome researchers and ML experts. The Machine Learning Techniques in Human Microbiome Studies (ML4Microbiome) COST Action CA18131 is a European network established in 2019 to promote collaboration between discovery-oriented microbiome researchers and data-driven ML experts to optimize and standardize ML approaches for microbiome analysis. This perspective paper presents the key achievements of ML4Microbiome, which include identifying predictive and discriminatory 'omics' features, improving repeatability and comparability, developing automation procedures, and defining priority areas for the development of novel ML methods targeting the microbiome. The insights gained from ML4Microbiome will help to maximize the potential of ML in microbiome research and pave the way for new and improved healthcare practices.

3.
Sensors (Basel) ; 22(18)2022 Sep 17.
Article in English | MEDLINE | ID: mdl-36146381

ABSTRACT

Diagnosis of cardiovascular diseases is an urgent task because they are the leading cause of death worldwide, accounting for roughly 32% of all deaths. Automated diagnostics based on machine learning methods are particularly relevant as healthcare is digitalized and personalized medicine is introduced into healthcare institutions, including at the individual level when designing smart homes. This study therefore analyzes short 10-s electrocardiogram (ECG) measurements taken from 12 leads and classifies patients with suspected myocardial infarction using machine learning methods. To do this, we developed four models based on the k-nearest neighbors classifier, radial basis function, decision tree, and random forest. An analysis of the time-domain parameters showed that the most significant parameters for diagnosing myocardial infarction are SDNN, BPM, and IBI. An experimental investigation was conducted on the open PTB-XL dataset for patients with suspected myocardial infarction. The results showed that, based on the parameters of the short ECG, patients with suspected myocardial infarction can be classified as sick or healthy with high accuracy. The optimized random forest model showed the best performance, with an accuracy of 99.63% and a root mean absolute error of less than 0.004. The proposed novel approach can be used for patients who do not have other indicators of heart attacks.
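As a rough illustration of the feature set named above, the sketch below computes IBI, BPM, and SDNN from R-peak times and feeds them to a random forest. It assumes R-peak detection and PTB-XL loading have already been done elsewhere; the two-record training set and its labels are purely illustrative, not the study's pipeline.

```python
# Hypothetical sketch: HRV-style features (IBI, BPM, SDNN) from R-peak times,
# then a random forest classifier. R-peak detection and PTB-XL loading omitted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hrv_features(r_peak_times_s):
    """Return [IBI (mean inter-beat interval, ms), BPM, SDNN (ms)]."""
    ibis_ms = np.diff(r_peak_times_s) * 1000.0
    ibi = ibis_ms.mean()
    bpm = 60000.0 / ibi
    sdnn = ibis_ms.std(ddof=1)
    return [ibi, bpm, sdnn]

# Two made-up records: regular rhythm vs. variable rhythm (labels are illustrative).
X = np.array([
    hrv_features(np.cumsum(np.full(12, 0.8))),                    # steady ~75 BPM
    hrv_features(np.array([0.0, 0.7, 1.5, 2.1, 3.0, 3.6, 4.5])),  # variable intervals
])
y = np.array([0, 1])  # 0 = healthy, 1 = suspected myocardial infarction

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))
```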


Subject(s)
Machine Learning; Myocardial Infarction; Electrocardiography/methods; Heart Rate; Humans; Myocardial Infarction/diagnosis; Myocardium
4.
BMC Bioinformatics ; 23(1): 65, 2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35148679

ABSTRACT

BACKGROUND: Microscopic examination of human blood samples is an excellent opportunity to assess general health status and diagnose diseases. Conventional blood tests are performed in medical laboratories by specialized professionals and are time- and labor-intensive. The development of a point-of-care system based on a mobile microscope and powerful algorithms would be beneficial for providing care directly at the patient's bedside. For this purpose, human blood samples were visualized using a low-cost mobile microscope, an ocular camera, and a smartphone. Different deep learning methods for instance segmentation were trained and optimised to detect and count the different blood cells, and the accuracy of the results was assessed using quantitative and qualitative evaluation standards. RESULTS: Instance segmentation models such as Mask R-CNN, Mask Scoring R-CNN, D2Det, and YOLACT were trained and optimised for the detection and classification of all blood cell types. Because these networks were not designed to detect very small objects in large numbers, extensive modifications were necessary. With these modifications, segmentation and classification of all blood cell types was feasible with high accuracy: qualitatively evaluated, a mean average precision of 0.57 and a mean average recall of 0.61 were achieved for all blood cell types, and quantitatively, 93% of ground-truth blood cells were detected. CONCLUSIONS: Mobile blood testing as a point-of-care system can be performed with diagnostic accuracy using deep learning methods. In the future, this application could enable very fast, cheap, location- and knowledge-independent patient care.
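The sketch below shows, hypothetically and using torchvision (not necessarily the framework used in this work), how a pretrained Mask R-CNN can be adapted to blood-cell classes and to fields of view containing many small objects. The class count, image size, and detection limit are assumptions; the paper's actual modifications were more extensive.

```python
# Hypothetical sketch (torchvision): adapt a COCO-pretrained Mask R-CNN to
# blood-cell classes and raise the per-image detection limit for dense scenes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 4  # assumed: background, red blood cell, white blood cell, platelet

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")  # downloads COCO weights

# Replace the box and mask heads so they predict the blood-cell classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)

# A single field of view can contain hundreds of cells, so allow more detections.
model.roi_heads.detections_per_img = 500

model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 512, 512)])[0]  # dummy tensor stands in for a microscope frame
print(prediction["boxes"].shape, prediction["masks"].shape)
```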


Subject(s)
Deep Learning; Algorithms; Humans; Microscopy; Neural Networks, Computer; Smartphone
5.
PLoS Comput Biol ; 16(9): e1008095, 2020 09.
Article in English | MEDLINE | ID: mdl-32881868

ABSTRACT

Research publications and data should nowadays be publicly available on the internet and, theoretically, usable by everyone to develop further research, products, or services. The long-term accessibility of research data is, therefore, fundamental to the economy of the research production process. However, the availability of data is not sufficient by itself; their quality must also be verifiable. Measures to ensure reuse and reproducibility need to cover the entire research life cycle, from experimental design to the generation of data, quality control, statistical analysis, interpretation, and validation of the results. Hence, high-quality records, in particular a chain of documents establishing the verifiable origin of data, are essential elements that can act as a certificate for potential users (customers). These records also improve the traceability and transparency of data and processes, thereby improving the reliability of results. Standards for data acquisition, analysis, and documentation have been fostered in the last decade, driven by grassroots initiatives of researchers and organizations such as the Research Data Alliance (RDA). Nevertheless, what is still largely missing in academic life science research are agreed procedures for complex routine research workflows. Here, well-crafted documentation such as standard operating procedures (SOPs) offers clear direction and instructions specifically designed to avoid deviations, an absolute necessity for reproducibility. Therefore, this paper provides a standardized workflow that explains step by step how to write an SOP, to be used as a starting point for appropriate research documentation.


Subject(s)
Methods; Records; Writing/standards; Documentation; Humans; Reproducibility of Results; Research Design/standards; Workflow
6.
F1000Res ; 9: 1398, 2020.
Article in English | MEDLINE | ID: mdl-33604028

ABSTRACT

Today, academic researchers benefit from the changes driven by digital technologies and the enormous growth of knowledge and data, from globalisation, from the enlargement of the scientific community, and from the linkage between different scientific communities and society. To fully benefit from this development, however, information needs to be shared openly and transparently. Digitalisation plays a major role here because it permeates all areas of business, science, and society and is one of the key drivers of innovation and international cooperation. To address the resulting opportunities, the EU, through its European strategy for Open Science (OS), promotes the development and use of collaborative ways to produce and share knowledge and data as early as possible in the research process, while also appropriately securing results. It is now widely recognised that making research results more accessible to all societal actors contributes to more effective and efficient science; it also serves as a boost for innovation in the public and private sectors. However, for research data to be findable, accessible, interoperable, and reusable, the use of standards is essential. At the metadata level, considerable standardisation efforts have already been made (e.g., Data Management Plans and the FAIR Principles), whereas for the raw data these fundamental efforts are still fragmented and in some cases completely missing. The CHARME consortium, funded by the European Cooperation in Science and Technology (COST), has identified needs and gaps in the field of standardisation in the life sciences and discussed potential hurdles for the implementation of standards in current practice. Here, the authors suggest four measures in response to current challenges, to ensure the high quality of life science research data and their re-usability for research and innovation.


Subject(s)
Biological Science Disciplines; Trust; International Cooperation; Metadata; Quality of Life